3D Semantic Segmentation

3D semantic segmentation is a computer vision task that partitions a 3D point cloud or 3D mesh into semantically meaningful parts or regions. The goal is to assign a semantic class label to every point or mesh element so that the objects and surfaces in a 3D scene can be identified and labeled, which supports applications such as robotics, autonomous driving, and augmented reality.
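As a minimal illustration of the per-point labeling the task requires, the sketch below classifies every point of a toy point cloud with a small PointNet-style network. It is only a sketch under stated assumptions: the class count, layer sizes, and random input cloud are placeholders for illustration and are not taken from any of the papers listed on this page.

```python
# Minimal per-point semantic segmentation sketch (hypothetical PointNet-style model).
# Assumes PyTorch; all sizes and the random "scene" below are illustrative only.
import torch
import torch.nn as nn

class TinyPointSegNet(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Shared per-point MLP: lifts raw xyz coordinates to a local feature.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        # Per-point classifier that also sees a global, max-pooled scene feature.
        self.head = nn.Sequential(
            nn.Linear(128 + 128, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (B, N, 3) point cloud -> (B, N, num_classes) per-point logits
        local_feat = self.point_mlp(points)                       # (B, N, 128)
        global_feat = local_feat.max(dim=1, keepdim=True).values  # (B, 1, 128)
        global_feat = global_feat.expand_as(local_feat)           # broadcast to each point
        return self.head(torch.cat([local_feat, global_feat], dim=-1))

if __name__ == "__main__":
    model = TinyPointSegNet(num_classes=4)
    cloud = torch.rand(2, 1024, 3)    # two toy scenes with 1024 points each
    logits = model(cloud)             # (2, 1024, 4)
    labels = logits.argmax(dim=-1)    # one semantic class per point
    print(labels.shape)               # torch.Size([2, 1024])
```

Real systems replace this toy network with stronger backbones (sparse voxel convolutions, point transformers, or image-to-LiDAR distilled features), but the output format is the same: one semantic label per point. The papers below, listed with their posting dates, represent recent work on this and closely related 3D scene-understanding tasks.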

The Devil is in the Details: Simple Remedies for Image-to-LiDAR Representation Learning (Jan 16, 2025)

Unified Few-shot Crack Segmentation and its Precise 3D Automatic Measurement in Concrete Structures (Jan 15, 2025)

Vision Foundation Models for Computed Tomography (Jan 15, 2025)

Skip Mamba Diffusion for Monocular 3D Semantic Scene Completion (Jan 13, 2025)

The 2nd Place Solution from the 3D Semantic Segmentation Track in the 2024 Waymo Open Dataset Challenge (Jan 06, 2025)

Advancing ALS Applications with Large-Scale Pre-training: Dataset Development and Downstream Assessment (Jan 09, 2025)

IPDN: Image-enhanced Prompt Decoding Network for 3D Referring Expression Segmentation (Jan 09, 2025)

LiMoE: Mixture of LiDAR Representation Learners from Automotive Scenes (Jan 07, 2025)

Gaussian Masked Autoencoders (Jan 06, 2025)

LargeAD: Large-Scale Cross-Sensor Data Pretraining for Autonomous Driving (Jan 07, 2025)